Understanding the Structure and Transformation of Tensors in Spacetime

In summary: the metric on spacetime is a symmetric (0,2) tensor, points in spacetime are represented as events, and there is structure on spacetime in the form of light cones.
  • #36
Ok, so I am pretty sure I understand what covariant and contravariant tensors are. A covariant tensor of type (0,2) is a map

[tex]T:V\times V \rightarrow \mathbb{R}[/tex]

Now if you take, say, a two-dimensional vector space [itex]V[/itex] which has two basis vectors [itex]\{e_1,e_2\}[/itex], then the dual vector space [itex]V^*[/itex], which (as robphy and pervect pointed out) has the same dimension as [itex]V[/itex], also has two basis vectors. Now am I correct in assuming that the basis vectors of the dual space are written in Greek? As in, [itex]\{\epsilon_1,\epsilon_2\}[/itex]?

So a vector [itex]\omega \in V^*[/itex] may be written as a sum of its basis components: [itex]\omega = \alpha\epsilon_1 + \beta\epsilon_2[/itex]?

Extending this idea to [itex]n[/itex]-dimensional vector spaces we have that [itex]e_1,e_2,\dots,e_n[/itex] is a basis for [itex]V[/itex] and [itex]\epsilon_1,\epsilon_2,\dots,\epsilon_n[/itex] is a basis for [itex]V^*[/itex].

As we have already discussed, I assume that when writing, say, the product of the two basis vectors [itex](\epsilon)(e)[/itex] with the indices included, we would write

[tex]\epsilon^i(e_j)[/tex]

So I would write the [itex]i[/itex] index superscripted because the [itex]\epsilon[/itex] basis vector came from the dual vector space, and the [itex]j[/itex] index is subscripted because [itex]e_j[/itex] came from the vector space. Is this the reason for superscripting and subscripting indices - to make a distinction about which space we are in? Because after all, they are by no means identical bases, even if the vector space and its dual are equal?

My last question for now is, why is the product

[tex]\langle \epsilon^i,e_j \rangle = \delta_j^i[/tex]

equal to the Kronecker delta? The Kronecker delta equals 1 if the indices are the same, and zero if the indices are different. Let's say that the vector space [itex]V[/itex] has [itex]n[/itex] dimensions and the dual space [itex]V^*[/itex] has [itex]m[/itex] dimensions. Then,

[tex]\delta_j^i = 1 + 1 + \dots + 1 + 1^* + 1^* + \dots + 1^*[/tex]

where there are [itex]n[/itex] 1's in the first sum and [itex]m[/itex] 1*'s in the second sum. Therefore

[tex]\delta_j^i = n + m = \dim(V) + \dim(V^*)[/tex]

which should equal the product of the basis vectors, [itex]\epsilon^i[/itex] and [itex]e_j[/itex]. Could this be the reason?
 
  • #37
Oxymoron said:
Ok, so I am pretty sure I understand what covariant and contravariant tensors are. A covariant tensor of type (0,2) is a map
[tex]T:V\times V \rightarrow \mathbb{R}[/tex]

Now if you take, say, a two-dimensional vector space [itex]V[/itex] which has two basis vectors [itex]\{e_1,e_2\}[/itex], then the dual vector space [itex]V^*[/itex], which (as robphy and pervect pointed out) has the same dimension as [itex]V[/itex], also has two basis vectors. Now am I correct in assuming that the basis vectors of the dual space are written in Greek? As in, [itex]\{\epsilon_1,\epsilon_2\}[/itex]?

Usually basis one-forms are written as [itex]\{\omega^1, \omega^2 \} [/itex]: a different Greek letter choice and, more importantly, superscripted rather than subscripted.

So a vector [itex]\omega \in V^*[/itex] may be written as a sum of its basis components: [itex]\omega = \alpha\epsilon_1 + \beta\epsilon_2[/itex]?

If you write out a vector as a linear sum of multiples of the basis vectors as you do above, it's traditional to write simply

[itex]x^i \, e_i[/itex]. Repeating the index i implies a summation, i.e.

[tex]\sum_{i=1}^{n} x^i e_i [/tex]
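As a quick numerical sketch of the convention (Python/numpy, with an arbitrarily chosen 2d basis and components):

[code]
import numpy as np

# Made-up 2d example: basis vectors e_1, e_2 stored as rows, components x^i.
e = np.array([[1.0, 0.0],    # e_1
              [1.0, 1.0]])   # e_2 (need not be orthonormal)
x = np.array([3.0, 2.0])     # components x^1, x^2

# x^i e_i with the summation convention is just this contraction:
v = np.einsum('i,ij->j', x, e)   # same as x[0]*e[0] + x[1]*e[1]
print(v)                         # [5. 2.]
[/code]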

Extending this idea to [itex]n[/itex]-dimensional vector spaces we have that [itex]e_1,e_2,\dots,e_n[/itex] is a basis for [itex]V[/itex] and [itex]\epsilon_1,\epsilon_2,\dots,\epsilon_n[/itex] is a basis for [itex]V^*[/itex].

If you write out a one-form in terms of the basis one-forms, it's
[itex]x_i \omega^i[/itex]

Is this the reason for superscripting and subscripting indices - to make a distinction about which space we are in? Because after all, they are by no means identical bases, even if the vector space and its dual are equal?

Yes. It also leads to fairly intuitive tensor manipulation rules when you get used to the notation.

My last question for now is, why is the product
[tex]\langle \epsilon^i,e_j \rangle = \delta_j^i[/tex]
equal to the Kronecker delta? The Kronecker delta equals 1 if the indices are the same, and zero if the indices are different. Let's say that the vector space [itex]V[/itex] has [itex]n[/itex] dimensions and the dual space [itex]V^*[/itex] has [itex]m[/itex] dimensions. Then,
[tex]\delta_j^i = 1 + 1 + \dots + 1 + 1^* + 1^* + \dots + 1^*[/tex]
where there are [itex]n[/itex] 1's in the first sum and [itex]m[/itex] 1*'s in the second sum. Therefore
[tex]\delta_j^i = n + m = \dim(V) + \dim(V^*)[/tex]
which should equal the product of the basis vectors, [itex]\epsilon^i[/itex] and [itex]e_j[/itex]. Could this be the reason?

In an orthonormal basis, [itex]e_i \cdot e_j = \delta_j^i[/itex]; this is not true in a general basis, only in an orthonormal one.

[itex]\omega^1 (e_1)[/itex] is then just different notation for [itex]e_1 \cdot e_1[/itex], so it will be unity only if the basis is normalized. Similarly, [itex]\omega^i (e_j)[/itex] will be zero for [itex]i \neq j[/itex] only when the basis vectors are orthogonal.
 
  • #38
Posted by Pervect.

Usually basis one-forms are written as...

What is a one-form?
 
  • #39
Oxymoron said:
What is a one-form?
A 1-form is a mapping (i.e., a function) which maps vectors to scalars. If "a" is a 1-form, "B" a vector, and "s" the resulting scalar, then the typical notation is

s = <a, B>

Pete
 
  • #40
Oxymoron said:
Because after all, they are by no means identical bases, even if the vector space and its dual are equal?

A vector space and its dual are not equal, but they have equal dimension. Any two vector spaces of equal dimension are isomorphic, but without extra structure (like a metric) there is no natural, basis-independent isomorphism.

The Kronecker delta equals 1 if the indices are the same, and zero if the indices are different.

Yes.

Let's say that the vector space [itex]V[/itex] has [itex]n[/itex] dimensions and the dual space [itex]V^*[/itex] has [itex]m[/itex] dimensions. Then,
[tex]\delta_j^i = 1 + 1 + \dots + 1 + 1^* + 1^* + \dots + 1^*[/tex]
where there are [itex]n[/itex] 1's in the first sum and [itex]m[/itex] 1*'s in the second sum. Therefore
[tex]\delta_j^i = n + m = \dim(V) + \dim(V^*)[/tex]

Careful - this isn't true.

My last question for now is, why is the product
[tex]\langle \epsilon^i,e_j \rangle = \delta_j^i[/tex]
equal to the Kronecker delta?

Given an n-dimensional vector space [tex]V[/tex], the dual space [tex]V*[/tex] is defined as

[tex]V^* = \left\{f: V \rightarrow \mathbb{R} \mid f \mathrm{\ is\ linear} \right\}.[/tex]

The action of any given linear mapping between vector spaces is pinned down by finding/defining its action on a basis for the vector space that is the domain of the mapping. A dual vector is a linear mapping from the vector space [tex]V[/tex] to [tex]\mathbb{R}[/tex], so this applies here.

Let [tex]\left\{ e_{1}, \dots , e_{n} \right\}[/tex] be a basis for [tex]V[/tex], and define [tex]\omega^{i}[/tex] by: 1) [tex]\omega^{i} : V \rightarrow \mathbb{R}[/tex] is linear; 2) [tex]\omega^{i} \left( e_{j} \right) = \delta^{i}_{j}[/tex]. Now let [tex] v = v^{j} e_{j}[/tex] (summation convention) be an arbitrary element of [tex]V[/tex]. Then

[tex]\omega^{i} \left( v \right) = \omega^{i} \left( v^{j} e_{j} \right) = v^{j} \omega^{i} \left( e_{j} \right) = v^{j} \delta^{i}_{j} = v^{i}.[/tex]

Consequently, [tex]\omega^{i}[/tex] is clearly an element of [tex]V*[/tex], and [tex]\omega^{i} \left( e_{j} \right) = \delta^{i}_{j}[/tex] by definition!
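For a concrete finite-dimensional check, note that if the [itex]e_{j}[/itex] are stored as the columns of a matrix [itex]E[/itex], the [itex]\omega^{i}[/itex] can be represented by the rows of [itex]E^{-1}[/itex], since then [itex]\omega^{i}(e_{j}) = (E^{-1}E)^{i}{}_{j} = \delta^{i}_{j}[/itex]. A sketch in Python with a made-up basis:

[code]
import numpy as np

# Made-up basis for R^3: the e_j are the columns of E.
E = np.array([[1.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 3.0]])

omega = np.linalg.inv(E)   # row i represents omega^i: omega^i(v) = (E^{-1} v)_i

print(np.allclose(omega @ E, np.eye(3)))      # omega^i(e_j) = delta^i_j: True

v = 2.0*E[:, 0] + 5.0*E[:, 1] - 1.0*E[:, 2]   # v = 2 e_1 + 5 e_2 - 1 e_3
print(omega @ v)                              # [ 2.  5. -1.]: omega^i picks out v^i
[/code]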

Exercise: prove that [tex]\left\{\omega^{1}, \dots , \omega^{n} \right\}[/tex] is a basis for the vector space [tex]V^*[/tex].

What is a one-form?

I like to make a distinction between a tensor and a tensor field. A tensor field (of a given type) on a differentiable manifold [tex]M[/tex] is the smooth assignment of a tensor at each [tex]p \in M[/tex].

A one-form is a dual vector field. Note, however, that some references call a dual vector a one-form. See the thread "One-forms" (https://www.physicsforums.com/showthread.php?t=96073) in the Linear & Abstract Algebra forum. I tried to sum up the situation in posts #11 and #23.

Regards,
George
 
  • #41
Posted by George Jones:

Careful - this isn't true.

I read what I wrote again and it does sound wrong. For one, how can the dimension of [itex]V[/itex] be different from that of [itex]V^*[/itex], as I implied with m and n? Still, I would like some clarification on exactly what is incorrect about it.

Exercise: prove that [itex]\{\omega^1,\dots,\omega^n\}[/itex] is a basis for the vector space [itex]V^*[/itex].

Well, each [itex]\omega^i[/itex] is linearly independent of the other dual basis vectors, from what you wrote, namely that

[tex]\omega^i(e_j) = \delta^i_j[/tex]

I'm not sure how to show that they span, though. I mean, I could probably do it, but I am not sure which vectors and bases to use.

Posted by George Jones:

A one-form is a dual vecor field. Note, however, that some references call a dual vector a one-form. See the thread "One-forms" in the Linear & Abstract Algebra forum. I tried to sum up the situation in posts #11 and #23.

I read that, and it makes sense, especially as I was almost up to that point. If you have a tensor of type [itex](0,2)[/itex] then it can be written as

[tex]T:V\times V \rightarrow \mathbb{R}[/tex]

which is covariant. If [itex]\omega[/itex] and [itex]\rho[/itex] are elements of [itex]V^*[/itex] (which means that they are simply linear functionals over [itex]V[/itex] right?) then we can define their 'tensor' product as

[tex]\omega \otimes \rho (u,v) = \omega(u) \rho(v)[/tex]

At first this was hard to get my head around. My first thought was that [itex]\omega \otimes \rho[/itex] was multiplied by [itex](u,v)[/itex], and so what was this [itex](u,v)[/itex] thing? But then I thought, this is just like

[tex]\phi(u,v) = \phi(u)\phi(v)[/tex]

in group theory - it's just a mapping! Here [itex]\omega \otimes \rho[/itex] is the 'symbol' (playing the role of [itex]\phi[/itex]) representing the tensor product acting on two arguments from [itex]V \times V[/itex].

My next question at this stage is: how does one define a basis on [itex]V\times V[/itex]? (Can I assume that the notation [itex]V^{(0,2)}[/itex] means [itex]V \times V[/itex]?)

Well, if [itex]V[/itex] has dimension [itex]n[/itex], then [itex]V^{(0,2)}[/itex] has dimension [itex]n^2[/itex]. So let [itex]\epsilon^i[/itex] be a basis of [itex]V^*[/itex]; then

[tex]\epsilon^i \otimes \epsilon^j[/tex]

forms a basis for [itex]V^{(0,2)}[/itex], yes?

Now, is [itex]\epsilon^i \otimes \epsilon^j[/itex] a tensor?

My main issue with dealing with these basis vectors is that I want to define the metric tensor next, and I am thinking that a sound understanding of how to define bases for these vector spaces and tensors is a logical stepping stone.
 
  • #42
Oxymoron said:
My next question at this stage is: how does one define a basis on [itex]V\times V[/itex]? (Can I assume that the notation [itex]V^{(0,2)}[/itex] means [itex]V \times V[/itex]?)

I've never seen that notation used, at least in physics.

Well, if [itex]V[/itex] has dimension [itex]n[/itex], then [itex]V^{(0,2)}[/itex] has dimension [itex]n^2[/itex]. So let [itex]\epsilon^i[/itex] be a basis of [itex]V[/itex], then
[tex]\epsilon^i \otimes \epsilon^j[/tex]
forms a basis for [itex]V^{(0,2)}[/itex], yes?

In tensor notation we use subscripts for vectors, so we'd usually write that [itex] e_i [/itex] is a basis of V (we would write [itex]\omega^i [/itex] as a basis of V*).
Now, is [itex]\epsilon^i \otimes \epsilon^j[/itex] a tensor?

[itex] e_i \otimes e_j [/itex] is an element of [itex] V \otimes V[/itex], not a map from [itex]V \otimes V [/itex] to a scalar.
 
  • #43
Oxymoron said:
Still, I would like some clarification on exactly what is incorrect about it.

As you said,
[tex]\delta^{i}_{j} = \left\{\begin{array}{cc}0,&\mbox{ if } i \neq j \\1, & \mbox{ if } i = j \end{array}\right. [/tex]
but [tex]\delta^{i}_{j}[/tex] is not expressed as a sum. [tex]\delta^{i}_{j}[/tex] can be used in sums, e.g.,
[tex]\sum_{i = 1}^{n} \delta^{i}_{j} = 1,[/tex]
and
[tex]\sum_{i = 1}^{n} \delta^{i}_{i} = n.[/tex]
I'm not sure how to show that they span, though. I mean, I could probably do it, but I'm not sure which vectors and bases to use.

First, let me fill in the linear independence argument. [itex]\left\{\omega^{1}, \dots , \omega^{n} \right\}[/itex] is linearly independent if [itex]0 = c_{i} \omega^{i}[/itex] implies that [itex]c_i = 0[/itex] for each [itex]i[/itex]. The zero on the left is the zero function, i.e., [itex]0(v) = 0[/itex] for all [itex]v \in V[/itex]. Now let the equation take [itex]e_{j}[/itex] as an argument:
[tex]0 = c_{i} \omega^i \left( e_{j} \right) = c_{i} \delta^{i}_{j} = c_{j}.[/tex]
Since this is true for each [itex]j[/itex], [itex]\left\{\omega^{1}, \dots , \omega^{n} \right\}[/itex] is a linearly independent set of covectors.

Now show spanning. Let [itex]f : V \rightarrow \mathbb{R}[/itex] be linear. Define scalars [itex]f_{i}[/itex] by [itex]f_{i} = f \left( e_{i} \right)[/itex]. Then
[tex]f \left( v \right) = f \left( v^{i} e_{i} \right) = v^{i} f \left( e_{i} \right) = v^{i} f_{i} .[/tex]
Now show that [itex]f_{i} \omega^{i} = f[/itex]:
[tex]f_{i} \omega^{i} \left( v \right) = f_{i} \omega^{i} \left( v^{j} e_{j} \right) = f_{i} v^{j} \omega^{i} \left( e_{j} \right) = f_{i} v^{j} \delta^{i}_{j} = f_{i} v^{i} = f \left( v \right).[/tex]
Since this is true for every [itex]v[/itex], [itex]f_{i} \omega^{i} = f[/itex].
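The spanning argument can also be checked numerically with a randomly chosen (hence generic) basis and functional; this is only an illustrative sketch:

[code]
import numpy as np

rng = np.random.default_rng(0)
E = rng.normal(size=(4, 4))    # columns e_j: a random (generic) basis
omega = np.linalg.inv(E)       # rows: the dual basis omega^i

f = rng.normal(size=4)         # a linear functional, f(v) = f @ v
f_i = f @ E                    # f_i = f(e_i)

print(np.allclose(f_i @ omega, f))   # f_i omega^i equals f: True
[/code]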

it's just a mapping! Here [itex]\omega \otimes \rho[/itex] is the 'symbol' (playing the role of [itex]\phi[/itex]) representing the tensor product acting on two arguments from [itex]V \times V[/itex].

Exactly!

Becoming used to the abstractness of this approach takes a bit of time and effort.

My next question at this stage is: how does one define a basis on [itex]V\times V[/itex]? (Can I assume that the notation [itex]V^{(0,2)}[/itex] means [itex]V \times V[/itex]?)

I think you mean [itex]V^* \otimes V^*[/itex]. [itex]V \times V[/itex] is the set of ordered pairs where each element of the ordered pair comes from [itex]V[/itex]. As a vector space, this is the external direct sum of [itex]V[/itex] with itself. What you want is the space of all (0,2) tensors, i.e.,
[tex]V^* \otimes V^* = \left\{ T: V \times V \rightarrow \mathbb{R} \mid T \mathrm{\ is\ bilinear} \right\}.[/tex]
Well, if [itex]V[/itex] has dimension [itex]n[/itex], then [itex]V^{(0,2)}[/itex] has dimension [itex]n^2[/itex]. So let [itex]\epsilon^i[/itex] be a basis of [itex]V^*[/itex]; then
[tex]\epsilon^i \otimes \epsilon^j[/tex]
forms a basis for [itex]V^{(0,2)}[/itex], yes?

Yes.

Now, is [itex]\epsilon^i \otimes \epsilon^j[/itex] a tensor?

Yes, [itex]\epsilon^i \otimes \epsilon^j[/itex] is, for each [itex]i[/itex] and [itex]j[/itex], an element of [itex]V^* \otimes V^*[/itex]. [itex]\epsilon^i[/itex] and [itex]\epsilon^j[/itex] are specific examples of your [itex]\omega[/itex] and [itex]\rho[/itex].

Regards,
George
 
  • #44
Thanks George and Pervect. Your answers helped me a lot; so much, in fact, that I have no further queries on that. Which is good.

But now I want to move along and talk about the metric tensor. Is a metric tensor similar to metric functions in say, topology or analysis? Do they practically do the same thing? That is, define a notion of distance between two objects? Or are they something completely abstract?

I had a quick look over the metric tensor and there seems to be several ways of writing it. The first method was to introduce an inner product space. Then to define a functional as

[tex]g\,:\,V\times V \rightarrow \mathbb{R}[/tex]

defined by

[tex]g(u,v) = u\cdot v[/tex]

As we have already discussed, this is a bilinear covariant tensor of degree two. Now, this is not the general metric tensor I have read about; instead, this is the metric tensor of the inner product. Are the two different? Or is the metric tensor always intertwined with some sort of inner product?

I understand that to introduce the idea of a metric we need some mathematical tool which represents distance. In this case the inner product usually represents the 'length' of an element. Is this the reason for introducing the metric tensor like this? Could you go further and, instead of an inner product, simply define the metric tensor via arc length or something general like that?
 
  • #45
What is an inner product? I ask this because I want to compare and contrast whatever answer you give with a "metric" tensor.

Regards,
George
 
  • #46
In my understanding to have an inner product you need a vector space [itex]V[/itex] over the field [itex]\mathbb{R}[/itex]. Then the inner product over the vector space is a bilinear mapping:

[itex]g\,:\,V\times V \rightarrow \mathbb{R}[/itex]

which is symmetric and positive-definite.
 
  • #47
So, isn't an inner product on a vector space [itex]V[/itex] a (0,2) tensor, i.e., an element of [itex]V* \otimes V*[/itex]?

Too exhausted to say more - took my niece and nephew skating (their first time; 3 and 4 years old), and pulling them around the ice completely drained me.

Regards,
George
 
  • #48
Well, I think of distances as being quadratic forms. Quadratic forms are in one-to-one correspondence with symmetric bilinear forms.

http://mathworld.wolfram.com/SymmetricBilinearForm.html

the definition of which leads you directly to your definition [itex] M : V \times V \rightarrow \mathbb{R}[/itex], except for the requirement of symmetry.

There's probably something deep to say about symmetry, but I'm not quite sure what it is. In GR we can think of the metric tensor as always being symmetric, so if you accept symmetry as a requirement, you can go directly from quadratic forms to symmetric bilinear forms.

Of course you have to start with the assumption that distances are quadratic forms, I'm not sure how to justify something this fundamental offhand.

[add]
I just read that there may be a very small difficulty with the above argument; see for instance

http://www.answers.com/topic/quadratic-form
 
  • #49
The metric tensor helps us "lower" or "raise" indices, thus allowing us to make scalars out of tensors. For example, say we want a scalar out of two rank 1 tensors [itex]A^\mu, B^\nu[/itex]. We can go for

[tex]g_{\mu\nu}A^\mu B^\nu.[/tex]

This is usually the inner product between A and B.

EDIT: The metric has other important functions too.
 
  • #50
Hi,

It seems nobody has yet answered a particular part of your original question, which is whether the geometry of spacetime can also be given a distance formulation. It can (in the achronal case), and here are the axioms:
(a) d(x,y) >= 0 and d(x,x) = 0
(b) d(x,y) > 0 implies that d(y,x) = 0.
(c) d(x,z) >= d(x,y) + d(y,z) if d(x,y)d(y,z) > 0

Notice that d can also take the value infinity. d gives a partial order, defined by x < y if and only if d(x,y) > 0, as you can easily verify. There is an old approach to general relativity based upon (a suitably differentiable and causally stable) d: the world function formulation of Synge.
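To make these axioms concrete, here is an illustrative sketch in Python (my own toy construction, not Synge's world function): in 1+1-dimensional Minkowski space, take d(x,y) to be the proper time from x to y when y lies in the causal future of x, and 0 otherwise.

[code]
import numpy as np

def d(x, y):
    # Proper time from event x to event y if y is in the causal
    # future of x, else 0. Events are (t, x) pairs.
    dt, dx = y[0] - x[0], y[1] - x[1]
    if dt >= abs(dx):                     # y in the causal future of x
        return np.sqrt(dt**2 - dx**2)
    return 0.0

p, q, r = (0, 0), (2, 1), (5, 1)
print(d(p, q), d(q, p))              # d(p,q) > 0 forces d(q,p) = 0 (axiom b)
print(d(p, r) >= d(p, q) + d(q, r))  # reverse triangle inequality (axiom c): True
[/code]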

Cheers,

Careful
 
  • #51
Posted by Careful:
It seems nobody answered yet a particular part of your original question which is wether the geometry of spacetime can also be given a distance formulation. It can (in the achronal case), and here are the axioms:

Thanks Careful. I was hoping there was some sort of metric structure on spacetime.

Posted by George Jones:

So, isn't an inner product on a vector space [itex]V[/itex] a (0,2) tensor, i.e., an element of [itex]V^* \otimes V^*[/itex]?

Well, after all that we have discussed, I suppose it is.

Posted by Masudr:

The metric tensor helps us "lower" or "raise" indices, thus allowing us to make scalars out of tensors.

So what you are saying is, I can turn a contravariant tensor into a covariant tensor (or a vector into a dual vector) by contracting it with the metric tensor.
 
  • #52
Oxymoron said:
I can turn a contravariant tensor into a covariant tensor (or a vector into a dual vector) by contracting it with the metric tensor.

Let [itex]v[/itex] be a 4-vector. Use the metric [itex]g[/itex] to define the covector [itex]\tilde{v}[/itex] associated with [itex]v[/itex]: for every 4-vector [itex]w[/itex]
[tex]\tilde{v} \left( w \right) := g \left( v , w \right).[/tex]
This is the abstract, index-free version of index lowering. To see this, let [itex]\left\{ e_{1}, \dots , e_{n} \right\}[/itex] be a basis for [itex]V[/itex], and let [itex]\left\{ e^{1}, \dots , e^{n} \right\}[/itex] be the associated basis for [itex]V*[/itex]. Write [itex]g_{ij} = g \left( e_{i} , e_{j} \right)[/itex].

Write [itex]v[/itex] in terms of the basis for [itex]V[/itex],
[tex]v = v^{i} e_{i},[/tex]
and [itex]\tilde{v}[/itex] in terms of the basis for [itex]V*[/itex],
[tex]\tilde{v} = v_{i} e^{i}.[/tex]
Then,
[tex]\tilde{v} \left( e_{j} \right) = v_{i} e^{i} \left( e_{j} \right) = v_{i} \delta_{j}^{i} = v_{j}.[/tex]
But, by definition,
[tex]\tilde{v} \left( e_{j} \right) = g \left( v , e_{j} \right) = g \left( v^{i} e_{i} , e_{j} \right) = v^{i} g \left( e_{i} , e_{j} \right) = v^{i} g_{ij}.[/tex]
Combining these results gives
[tex]v_{j} = v^{i} g_{ij}.[/tex]
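In components, lowering is just the contraction of a matrix with a vector. A quick numpy sketch (the Minkowski components [itex]\eta = \mathrm{diag}(1,-1,-1,-1)[/itex] are chosen purely for concreteness):

[code]
import numpy as np

# Metric components g_ij in an orthonormal basis, signature (+,-,-,-).
g = np.diag([1.0, -1.0, -1.0, -1.0])

v_up = np.array([2.0, 1.0, 0.0, 0.0])   # contravariant components v^i

# v_j = v^i g_ij (index lowering):
v_down = np.einsum('i,ij->j', v_up, g)
print(v_down)                           # [ 2. -1. -0. -0.]
[/code]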
Much more to follow.

Regards,
George
 
  • #53
SUMMARY SO FAR...

A tensor is a multilinear map. In fact, it is just a generalization of the linear functionals encountered in linear algebra. Therefore, a tensor, in its barest form, is

[tex]T\,:\,V_1 \times V_2 \times \dots \times V_N \times V^*_1 \times V^*_2 \times \dots \times V^*_M \rightarrow \mathbb{R}[/tex]

where [itex]V_1,V_2,\dots,V_N[/itex] are vector spaces over a field [itex]\mathbb{R}[/itex] and [itex]V^*_1,V^*_2,\dots,V^*_M[/itex] are the corresponding dual vector spaces over the same field.

The tensor written above is a tensor of type (M,N). As a result, a tensor of type (0,1) is simply a linear functional

[tex]T\,:\, V \rightarrow \mathbb{R}[/tex]

For every vector space [itex]V[/itex] there exists a dual vector space [itex]V^*[/itex] consisting of all linear functionals on [itex]V[/itex]. From now on we will refer to linear functionals on [itex]V[/itex] as covectors, or 1-forms.

Let [itex]V[/itex] be a finite dimensional vector space. Its dimension, [itex]n[/itex], is the number of vectors in any basis for [itex]V[/itex]. Thus,

[tex]\dim(V) = n[/tex]

and there are [itex]n[/itex] basis vectors, [itex]\{e_1,e_2,\dots,e_n\}[/itex]. Likewise for the corresponding dual vector space [itex]V^*[/itex], whose dimension is [itex]\dim(V^*) = n[/itex]. The basis of the dual vector space consists of [itex]n[/itex] basis covectors [itex]\{\epsilon^1,\epsilon^2,\dots,\epsilon^n\}[/itex] satisfying

[tex]\epsilon^i(e_j) = \delta^i_j[/tex]

where [itex]\delta^i_j[/itex] is the Kronecker delta.

Now let's add more structure to our vector space by defining an inner product on it. As a result, [itex]V[/itex] becomes an inner product space with

[tex]g\,:\, V \times V \rightarrow \mathbb{R}[/tex]

defined as

[tex]g(u,v) = u\cdot v[/tex]

This bilinear functional is actually a covariant tensor of degree 2, or simply the metric tensor. Covariant tensors are of the form

[tex]T\,:\,V \times V \times \dots \times V \rightarrow \mathbb{R}[/tex]

and contravariant tensors are of the form

[tex]T\,:\,V^*\times V^* \times \dots \times V^* \rightarrow \mathbb{R}[/tex].

A tensor can also be symmetric, as in the case of the metric tensor. Such tensors have the following property:

[tex]g(u,v) = g(v,u)[/tex]

that is, swapping the two arguments does not change the value of the tensor.

One of the fundamental features of tensors is the ability to raise and lower indices. Suppose we have a contravariant tensor [itex]T^i[/itex]; [itex]g_{ij}[/itex], as we have seen, is a covariant tensor of degree 2. Then

[tex]g_{ij}T^i = T_j[/tex]

and we say that contracting with the metric tensor has lowered a contravariant index to a covariant index. It can be shown that [itex]g[/itex] is invertible, so

[tex]g^{ij}T_j = T^i[/tex]

where [itex]g^{ij}[/itex] are the components of the matrix inverse of [itex](g_{ij})[/itex].
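Numerically, [itex]g^{ij}[/itex] really is the matrix inverse of [itex]g_{ij}[/itex], and raising after lowering recovers the original components. A short sketch (the metric here is a made-up symmetric, invertible example):

[code]
import numpy as np

g = np.array([[2.0, 1.0],    # a made-up symmetric, invertible metric g_ij
              [1.0, 3.0]])
g_inv = np.linalg.inv(g)     # g^ij

T_up = np.array([4.0, -1.0])               # T^i
T_down = g @ T_up                          # T_j = g_ij T^i (g is symmetric)
print(np.allclose(g_inv @ T_down, T_up))   # raising recovers T^i: True
[/code]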
 
  • #54
Oxymoron said:
SUMMARY SO FAR...
A tensor is a multilinear map. In fact, it is just a generalization of the linear functionals encountered in linear algebra. Therefore, a tensor, in its barest form, is
[tex]T\,:\,V_1 \times V_2 \times \dots \times V_N \times V^*_1 \times V^*_2 \times \dots \times V^*_M \rightarrow \mathbb{R}[/tex]
where [itex]V_1,V_2,\dots,V_N[/itex] are vector spaces over a field [itex]\mathbb{R}[/itex] and [itex]V^*_1,V^*_2,\dots,V^*_M[/itex] are the corresponding dual vector spaces over the same field.
The tensor written above is a tensor of type (M,N). As a result, a tensor of type (0,1) is simply a linear functional
[tex]T\,:\, V \rightarrow \mathbb{R}[/tex]
For every vector space [itex]V[/itex] there exists a dual vector space [itex]V^*[/itex] consisting of all linear functionals on [itex]V[/itex]. From now on we will refer to linear functionals on [itex]V[/itex] as covectors, or 1-forms.
Let [itex]V[/itex] be a finite dimensional vector space. Its dimension, [itex]n[/itex], is the number of vectors in any basis for [itex]V[/itex]. Thus,
[tex]\dim(V) = n[/tex]
and there are [itex]n[/itex] basis vectors, [itex]\{e_1,e_2,\dots,e_n\}[/itex]. Likewise for the corresponding dual vector space [itex]V^*[/itex], whose dimension is [itex]\dim(V^*) = n[/itex]. The basis of the dual vector space consists of [itex]n[/itex] basis covectors [itex]\{\epsilon^1,\epsilon^2,\dots,\epsilon^n\}[/itex] satisfying
[tex]\epsilon^i(e_j) = \delta^i_j[/tex]
where [itex]\delta^i_j[/itex] is the Kronecker delta.

This is very good, and extremely well written, except for a minor but potentially important detail

A basis, in my understanding, is defined as a set of vectors that are linearly independent and span the space - a basis does not have to be orthonormal, e.g.

http://mathworld.wolfram.com/VectorSpaceBasis.html

Thus it is not necessarily true that [itex]e_i \cdot e_j = \delta^i_j[/itex].

IF [itex]e_i \cdot e_j = \delta^i_j [/itex] THEN [itex] \epsilon^i(e_j) = \delta^i_j[/itex]

So the above statement is not necessarily true; all we need to say at this point is that the [itex]\epsilon^i[/itex] span the dual space and are linearly independent. Any other claim imposes more structure on them than is necessarily there.

Now let's add more structure to our vector space by defining an inner product on it.

After we've done this, we can talk about the e_i being orthonormal, and we can also make the remark that this is equivalent to [itex]\epsilon^i (e_j) = \delta^i_j[/itex]
 
  • #55
After we've done this, we can talk about the e_i being orthonormal, and we can also make the remark that this is equivalent to [itex]\epsilon^i (e_j) = \delta^i_j[/itex]

You're right. I should have written this first. Thank you for the compliments, though.

Before I go on, I think I left something out. We can form a tensor space by collecting all tensors of a fixed type [itex](r,s)[/itex]. This space is actually a vector space: tensors of the same type can be added together and multiplied by real numbers. The problem for our tensor space is in defining a basis for it. This is where we need a new operation called the tensor product, denoted by [itex]\otimes[/itex].

If [itex]T[/itex] is a [itex](k,l)[/itex] tensor and [itex]S[/itex] is a [itex](m,n)[/itex] tensor we define [itex]T\otimes S[/itex] as a [itex](k+m,l+n)[/itex] tensor, by

[tex]T\otimes S (\omega^1,\dots,\omega^k,\dots,\omega^{k+m},V^1,\dots,V^l,\dots,V^{l+n})[/tex]
[tex] = T(\omega^1,\dots,\omega^k,V^1,\dots,V^l)S(\omega^{k+1},\dots,\omega^{k+m},V^{l+1},\dots,V^{l+n})[/tex]

where the [itex]\omega^i[/itex] and [itex]V^i[/itex] are distinct dual vectors and vectors. That is, we define the tensor product of [itex]T[/itex] and [itex]S[/itex] by first acting [itex]T[/itex] on the appropriate set of dual vectors and vectors, then acting [itex]S[/itex] on the remainder, and multiplying the results together. Note that this operation is not commutative.
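For a pair of covectors, the components of the tensor product are just an outer product, and the non-commutativity is easy to see numerically. A sketch in Python with made-up components:

[code]
import numpy as np

# Two covectors ((0,1) tensors) on R^2, given by their components.
w = np.array([1.0, 2.0])
r = np.array([3.0, -1.0])

wr = np.outer(w, r)    # components (w tensor r)_ij = w_i r_j
rw = np.outer(r, w)    # components (r tensor w)_ij = r_i w_j

u = np.array([1.0, 1.0])
v = np.array([2.0, 0.0])

# (w tensor r)(u,v) = w(u) r(v):
print(np.isclose(u @ wr @ v, (w @ u) * (r @ v)))   # True
print(np.allclose(wr, rw))                         # False: the product is not commutative
[/code]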

We can now construct a basis for the space of all fixed type tensors, by taking the tensor product of basis vectors and dual vectors. The resulting basis will consist of all the tensors of the following form:

[tex]e_{i_1}\otimes \dots \otimes e_{i_k} \otimes \epsilon^{j_1} \otimes \dots \otimes \epsilon^{j_l}[/tex]

Therefore, every tensor [itex]T[/itex] of the fixed type [itex](k,l)[/itex] has a unique expansion:

[tex]T = T^{i_1\dots i_k}_{j_1\dots j_l} e_{i_1} \otimes \dots \otimes e_{i_k} \otimes \epsilon^{j_1} \otimes \dots \otimes \epsilon^{j_l}[/tex]

where [itex]T^{i_1\dots i_k}_{j_1\dots j_l} = T(\epsilon^{i_1},\dots,\epsilon^{i_k}, e_{j_1}, \dots, e_{j_l})[/itex], which are simply the components of the tensor with respect to the basis of [itex]V[/itex].

But expressing a tensor [itex]T[/itex] by its components [itex]T^{i_1\dots i_k}_{j_1\dots j_l}[/itex] is just like expressing a vector by its components - merely a shortcut.
 
  • #56
Yes that looks good and fairly complete, so far. Some of my understanding might also help.

A nice thing to remember is that the components of a vector transform like a vector, but the basis vectors transform like the components of a covector; since the vector is the components multiplied by the basis vectors, the vector itself is invariant! Similar considerations apply to covector components and basis covectors (the opposite way around, of course). And that is why tensors are objects which do not change under co-ordinate transformations (don't get confused: the components transform, but the basis transforms in the opposite way).

Another thing to note is that often the simplest basis for a vector space at some point is the co-ordinate basis, namely the vectors pointing along the co-ordinate lines at that point. And a covector basis can be defined as those linear functionals which, acting on the corresponding co-ordinate basis, give the Kronecker delta.

PS. I should formalise this in LaTeX, but it's pretty late here.
 
  • #57
Oxymoron said:
You're right. I should have written this first.

Actually, in spite of the presence of a Kronecker delta, the relation [itex]\epsilon^i (e_j) = \delta^i_j[/itex] has nothing to do with either orthogonality or a metric tensor. Given a basis for a vector space [itex]V[/itex], this relation can always be used to define a basis for the vector space [itex]V^*[/itex], the algebraic dual of [itex]V[/itex]. No metric tensor is needed. Also, the relation does not define a metric tensor.

Orthogonality is a condition between vectors in the same vector space, and the [itex]e_j[/itex] and the [itex]\epsilon^i[/itex] live in different vector spaces. The initial basis for [itex]V[/itex] need not be an orthonormal one in order to use the above relation to define a basis for [itex]V*[/itex].

The construction that I outlined towards the end of post #40 made no use of a metric tensor.

Regards,
George
 
  • #58
George Jones said:
Actually, in the spite of the presence of a Kronecker delta, the relation [itex]\epsilon^i (e_j) = \delta^i_j[/itex] has nothing to do either with orthogonality, or with a metric tensor. Given a basis for a vector space [itex]V[/itex], this relation can always be used to define a basis for the vector space [itex]V*[/itex], the algebraic dual of [itex]V[/itex]. No metric tensor is needed. Also, the relation does not define a metric tensor.

Why do you say that?

It seems to me that the relation does define a dot product, and hence the metric tensor, in a very natural way.

If [itex] \epsilon^j(e_i) = \delta^j_i[/itex], you have defined a particular mapping from basis vectors to basis co-vectors: [itex]e_i[/itex] is associated with [itex]\epsilon^i[/itex], the co-vector with the same index.

Now it is possible that you do not want to make use of this relationship, but if you don't want to use it, why specify it? I.e., [itex]\epsilon^j(e_i) = \delta^j_i[/itex] has the purpose of singling out a unique co-vector associated with the vector of the same index. If there is some other purpose for writing this relationship down, please enlighten me, because I'm missing the point :-(.

Given that we actually make use of this association, we now have a map from the basis vectors to the basis co-vectors - for every [itex]e_i[/itex], we have singled out a unique [itex]\epsilon^i[/itex] with the same index, thus we have assigned a unique co-vector to every basis vector.

Because an arbitrary vector can be defined by a weighted linear sum of basis vectors, we have also defined a map from every vector u to a unique dual vector, found by substituting all the [itex]e_i[/itex] with the [itex]\epsilon^i[/itex] and keeping the same linear weights.

Given a mapping between vectors and duals, we have defined a dot product.
Given two vectors u and v, we use the above mapping to find u*. u* is a map from a vector to a scalar. Applying this map u* to the vector v, we then have a scalar. Thus we have a mapping - a bilinear mapping, though I've skipped over proving this - from two vectors (u,v) to a scalar. This bilinear mapping from pairs of vectors to scalars defines the dot product, and a metric tensor.

[add]
Something I should add - in this dot product we have defined above, we can now ask - what is [itex]e_i \cdot e_j[/itex]? A little work shows the answer is [itex]\delta^i_j[/itex]. Thus our basis vectors are orthonormal.

The way I view the end result is that if we have a vector space, from any set of basis vectors we can form a consistent dot product in which those basis vectors are orthonormal. However, when dealing with physics, we have a physical notion of distance which imposes a particular dot product, one that arises from the physics (our ability to measure distances). Thus we restrict the mathematical notions of possible dot product to those that are physically meaningful.
 
  • #59
pervect said:
Why do you say that?

Let me start with an example. Let (V,g) be a Minkowski vector space, and let {e_0, e_1, e_2, e_3} be an orthonormal (i.e., 1 = g(e_0 , e_0) = -g(e_1 , e_1) = -g(e_2 , e_2) = -g(e_3 , e_3)) basis. There is nothing that prohibits taking u = v in an inner product. Let u = v = e_0 + e_1. Using the bilinearity and symmetry of g gives

g(u,u) = g( e_0 + e_1 , e_0 + e_1) = g(e_0 , e_0) + 2g(e_0 , e_1) + g(e_1 , e_1) = 1 + 0 + (-1) = 0.

u is a lightlike vector as expected.

Now calculate the inner product using your construction. This gives

(e^0 + e^1) (e_0 + e_1) = e^0(e_0 + e_1) + e^1(e_0 + e_1) = 1 + 0 + 0 + 1 = 2
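Both computations are easy to check numerically; a sketch (the second print implements the identification of e^i with e_i, i.e., the Euclidean dot product of components):

[code]
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # g in the orthonormal basis, (+,-,-,-)
u = np.array([1.0, 1.0, 0.0, 0.0])       # u = e_0 + e_1

print(u @ eta @ u)   # g(u,u) = 0.0: u is lightlike
print(u @ u)         # the pairing above gives 2.0, not 0.0
[/code]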

Regards,
George
 
  • #61
My approach works for Minkowski vector spaces if one takes (1)

[tex]\epsilon^0(e_0) = -1 \hspace{.5 in} \epsilon^1(e_1) = 1 \hspace{.5 in} \epsilon^2(e_2) = 1 \hspace{.5 in} \epsilon^3(e_3) = 1[/tex]

Therefore this is what I always do, and thus I do not assume that (2) [itex] \epsilon^i(e_j) = \delta^i_j [/itex].

After a bit of head scratching, I think I can see where one _could_ assume that [itex] \epsilon^i(e_j) = \delta^i_j [/itex], but it strikes me as being very awkward and unnatural.

[add]
Basically, I'd rather keep [itex]\epsilon^i = g^{ij} e_j[/itex] , something that is true with (1) and not true with (2).
 
  • #62
I think the problem here (and please correct me if I'm wrong) is that one of you (pervect) is choosing to use specifically the co-ordinate basis (which requires the notion of calculus on manifolds), which then gives you the Minkowski metric, and the other is using an approach before co-ordinates and calculus have been defined on the manifold.

Any vector space has an infinite number of valid bases, so both are, of course, correct.
 
  • #63
pervect said:
My approach works for Minkowski vector spaces if one takes (1)
[tex]\epsilon^0(e_0) = -1 \hspace{.5 in} \epsilon^1(e_1) = 1 \hspace{.5 in} \epsilon^2(e_2) = 1 \hspace{.5 in} \epsilon^3(e_3) = 1[/tex]
I'm trying to understand your approach. Are you saying that (1), a relationship between a basis in V and a basis in the dual-space V*, defines the Minkowski metric [on V]?
 
  • #64
robphy said:
I'm trying to understand your approach. Are you saying that (1), a relationship between a basis in V and a basis in the dual-space V*, defines the Minkowski metric [on V]?

Exactly. First we note that a mapping of basis vectors between V and V* defines a general linear map from V to V*.

I.e., in a 2d space {e_0, e_1} we can represent an arbitrary vector v as [itex]\alpha e_0 + \beta e_1[/itex]. Now, since we have a map from the basis vectors [itex]e_i[/itex] of V to the basis vectors [itex]\epsilon^i[/itex] of V*, it's perfectly natural to map an arbitrary element v [itex]\in[/itex] V to the element [itex]\alpha \epsilon^0 + \beta \epsilon^1[/itex] of V*.

Let A be such a general mapping A : V -> V*. Then for any vector u in V, we have A(u) in V*. A(u) is a map from V to [itex]\mathbb{R}[/itex] by the definition of the dual space. Thus if we have two vectors, u and v in V, we can form (A(u))(v), which is a scalar and bilinear in u and v. This defines a metric.

Another shorter way of saying this - the mapping A from V to V* is actually [itex]g^i{}_j[/itex] in tensor notation, a mixed tensor of rank (1,1). Defining [itex]g^i{}_j[/itex] is as valid a way of defining a metric as defining [itex]g_{ij}[/itex] or [itex]g^{ij}[/itex]

Furthermore, I'm of the opinion that if you have one set of mappings [itex]g^i{}_j[/itex] from V to V*, the ones defined by the tensor "raising/lowering" index conventions, it's very confusing to have a *different* set of mappings "hanging around" that don't follow the tensor "index" conventions, and that the set of mappings that make [itex]\epsilon^i(e_j) = \delta^i_j[/itex] is just such a set of different mappings. It will tend to confuse people, IMO, and I can't see any particular use for it.

I suppose I'm also requiring that g_ij be invertible, but the basis vectors are linearly independent by definition, and that should be enough to ensure that an inverse exists.
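In coordinates, this recipe amounts to expanding u and v in the chosen basis and taking the Euclidean dot product of the component vectors. A Python sketch (bases made up) showing explicitly that the resulting dot product depends on the basis you start from - which is just the point made in post #58, that any set of basis vectors can be made orthonormal for a suitable dot product:

[code]
import numpy as np

def dot_from_basis(E, u, v):
    # Expand u, v in the basis given by the columns of E, then take
    # the Euclidean dot product of the component vectors.
    cu = np.linalg.solve(E, u)   # components of u in the e-basis
    cv = np.linalg.solve(E, v)
    return cu @ cv

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])

E1 = np.eye(2)                  # standard basis
E2 = np.array([[1.0, 1.0],      # a different (made-up) basis
               [0.0, 1.0]])

print(dot_from_basis(E1, u, v))   # 0.0
print(dot_from_basis(E2, u, v))   # -1.0: different basis, different dot product
[/code]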
 
  • #65
pervect: I concur with George Jones. The dual basis [itex]\epsilon^i[/itex] to a basis [itex]e_i[/itex] is a very, well, basic part of linear algebra. Among other things, it is the only basis such that, for any vector v, we have that [itex]v = \sum_i \epsilon^i(v) e_i[/itex].


Basically, I'd rather keep [itex]\epsilon^i = g^{ij} e_j[/itex] , something that is true with (1) and not true with (2).
That expression doesn't even make sense. The left hand side is an upper-indexed collection of covectors. The right hand side is an upper-indexed collection of vectors -- the types don't match.

To state it differently, each of the [itex]e_j[/itex]'s is a vector, and thus has an upper index, which I will denote with a dot. Each of the [itex]\epsilon^i[/itex]'s is a covector, and thus has a lower index, which I will denote with another dot. You're claiming that [itex]\epsilon^i_\cdot = g^{ij} e_j^\cdot[/itex], but the indices don't line up.



Another shorter way of saying this - the mapping A from V to V* is actually [itex]g^i{}_j[/itex] in tensor notation, a mixed tensor of rank (1,1)
No, it is not! This is an elementary fact about dual spaces: there does not exist a natural map from a vector space to its dual. Loosely speaking, everything transforms wrongly -- if I change coordinates in [itex]V[/itex] by doubling everything, I need to change coordinates in [itex]V^*[/itex] by halving everything. A tensor would do the wrong thing!


Remember that, in the way people usually do coordinates, the action of a covector [itex]\epsilon^i[/itex] on a vector [itex]e_j[/itex] is simply given by [itex]\epsilon^i(e_j) = \epsilon^i_k e^k_j[/itex] -- it is determined by a rank-(1,1) tensor. The only rank-(1,1) tensors whose components are coordinate-independent are the multiples of [itex]\delta^i_j[/itex], and thus [itex]\epsilon^i(e_j) = c \delta^i_j[/itex] are the only possible choices if you want the components of this expression to be independent of the coordinates.
 
  • #66
I've been trying to follow this. From what I can gather, the definition of a metric tensor which I wrote about 12 posts ago was the "metric tensor of an inner product". I neglected to write "inner product" because, well, at that stage it was the only metric tensor I knew.

See I was considering a real inner product space [itex](V,\cdot)[/itex] with a map

[tex]g\,:\, V \times V \rightarrow \mathbb{R}[/tex]

defined by taking the inner product of two arguments:

[tex]g(u,v) = u\cdot v[/tex]

Then one could say that when the tensor [itex]g_{ij}[/itex] acts on two elements from [itex]V[/itex] such that

[tex]g_{ij}u^iv^j[/tex]

then this action is simply taking the inner product. That is

[tex]u\cdot v = g_{ij}u^iv^j [/tex]

In my opinion, the Kronecker delta, [itex]\delta^i_j[/itex] should be used with caution in general. For example, in a Euclidean inner product space with a metric tensor you can find an orthonormal basis [itex]\{e_1,\dots,e_n\}[/itex] such that

[tex]e_i \cdot e_j = g_{ij} = \delta_{ij}[/tex]

This way, [itex]\delta_{ij}[/itex] is a tensor and we have the special property

[tex]g_{ij} = \delta_{ij} = g^{ij}[/tex]

and so every tensor can be written with all its indices raised or lowered, since raising and lowering has no effect on the values of the components of the tensors. But, of course, you can only do this in Cartesian space where there is a good notion of orthonormality.

I'm not sure if I have helped or not. I just wanted to clarify what I was talking about.
 
  • #67
pervect said:
Another shorter way of saying this - the mapping A from V to V* is actually [itex]g^i{}_j[/itex] in tensor notation, a mixed tensor of rank (1,1). Defining [itex]g^i{}_j[/itex] is as valid a way of defining a metric as defining [itex]g_{ij}[/itex] or [itex]g^{ij}[/itex]
Oxymoron said:
In my opinion, the Kronecker delta, [itex]\delta^i_j[/itex] should be used with caution in general.
Just a comment on the Kronecker delta [itex]\delta^i{}_j[/itex]...
this is sometimes called the "[abstract-]index substitution operator" since [itex]p^i=\delta^i{}_j p^j[/itex] and [itex]q_j=\delta^i{}_j q_i[/itex]. So, it seems to me that, since [itex]\delta^i{}_j=g^i{}_j=g^{ik}g_{kj}=(g^{-1})^{ik}(g)_{kj}[/itex], specifying [tex]g^i{}_j[/tex] cannot uniquely define a metric on V.
[edit]...unless, possibly, you pick out a preferred basis.
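This is quick to check numerically (a sketch with two made-up metrics): whatever invertible metric you start with, the mixed components [itex]g^i{}_j[/itex] come out as the identity matrix, so they carry no information about the metric itself:

[code]
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric components
g = np.array([[2.0, 1.0, 0.0, 0.0],      # some other made-up metric
              [1.0, 3.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 5.0]])

for metric in (eta, g):
    mixed = np.linalg.inv(metric) @ metric   # g^i_j = g^{ik} g_{kj}
    print(np.allclose(mixed, np.eye(4)))     # True for both metrics
[/code]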
 
  • #68
Oxymoron said:
In my opinion, the Kronecker delta, [itex]\delta^i_j[/itex] should be used with caution in general.

I cannot emphasize strongly enough that, given a basis for a vector space [itex]V[/itex], there is no problem with using the Kronecker delta to define an associated dual basis for [itex]V^*[/itex], the algebraic dual of [itex]V[/itex]. This is a very useful construction that is independent of the signature of any possible "metric" that is defined on [itex]V[/itex]. As Hurkyl says, this is an oft-used construction in (multi)linear algebra.

Given a basis [itex]\left\{ e_{i} \right\}[/itex] for [itex]V[/itex], I prefer to use a different symbol (as mentioned by pervect) for the associated basis of [itex]V*[/itex], i.e., define linear functionals on [itex]V[/itex] by [itex]\omega^{i} \left( e_{j} \right) = \delta^{i}_{j}[/itex]. Then each [itex]\omega^{i}[/itex] lives in [itex]V*[/itex], and [itex]\left\{ \omega^{i} \right\}[/itex] is a basis for [itex]V*[/itex].

Hurkyl gave a nice property of the basis [itex]\left\{ \omega^{i} \right\}[/itex] that exists even when there is no metric tensor defined on [itex]V[/itex]. When a (non-degenerate) metric tensor (of any signature!) is defined on [itex]V[/itex], the [itex]\left\{\omega^{i} \right\}[/itex] basis for [itex]V^*[/itex] has another nice property. If the metric is used as a map from [itex]V[/itex] to [itex]V^*[/itex], and if the components of a vector [itex]v[/itex] in [itex]V[/itex] with respect to the [itex]\left\{ e_{i} \right\}[/itex] basis are [itex]v^{i}[/itex], then the components of the covector that the metric maps [itex]v[/itex] to are [itex]v_{i}[/itex] with respect to the basis [itex]\left\{ \omega^{i} \right\}[/itex] for [itex]V^*[/itex].

The dual basis for [itex]V^*[/itex] defined using a Kronecker delta is quite important even when the inner product on [itex]V[/itex] is not positive definite.

A reason for using different symbols for a basis for [itex]V*[/itex] that is dual to a basis for [itex]V[/itex] is as follows.

Let [itex]g[/itex] be a non-degenerate inner product (of any signature) defined on [itex]V[/itex], and define

[tex]e^{i} = g^{ij} e_{j}.[/tex]

For each [itex]i[/itex] and [itex]j[/itex], [itex]g^{ij}[/itex] is a real number, and therefore each [itex]e^{i}[/itex] is a linear combination of the elements in the basis [itex]\left\{ e_{i} \right\}[/itex] of [itex]V[/itex]. As such, each [itex]e^{i}[/itex] is an element of [itex]V[/itex], i.e., a vector, and not an element of [itex]V*[/itex], i.e., not a covector. This is true in spite of the fact that each [itex]e^{i}[/itex] transforms "the wrong way to be a vector".

I gave a short outline of the connection between the abstract multilinear algebra approach to tensors and the transformation approach in this thread: https://www.physicsforums.com/showthread.php?t=105868.

A metric tensor can be defined without using bases. I am working on a long post about Minkowski spacetime that might bring this thread full circle back to its beginning, and might be amenable to Oxymoron's math(s) background. It starts by defining a Minkowski vector space.

Minkowski spacetime [itex]\left( V,g\right)[/itex] is a 4-dimensional vector space [itex]V[/itex] together with a symmetric, non-degenerate, bilinear mapping [itex]g:V\times V\rightarrow\mathbb{R}[/itex]. A vector in [itex]V[/itex] is called a 4-vector, and a 4-vector [itex]v[/itex] is called timelike if [itex]g\left(v,v\right) >0[/itex], lightlike if [itex]g\left(v,v\right) =0[/itex], and spacelike if [itex]g\left(v,v\right) <0[/itex]. [itex]\left( V,g\right)[/itex] is such that: 1) timelike vectors exist; 2) [itex]v[/itex] is spacelike whenever [itex]u[/itex] is timelike and [itex]g\left( u,v\right)=0[/itex].

Regards,
George
 
  • #69
Before I go on, just a small clarification. So far we have been referring to the metric tensor by [itex]g_{ij}[/itex]. Now, if we move to Minkowski spacetime, is the metric tensor given by [itex]\eta_{\mu\nu}[/itex]?

If we consider spacetime, then the action of the metric [itex]\eta_{\mu\nu}[/itex] on two arbitrary vectors [itex]v,w[/itex] is basically an inner product, isn't it? Since

[tex]\eta(v,w) = \eta_{\mu\nu}v^{\mu}w^{\nu} = v\cdot w[/tex]

Now, since inner products between two vectors always give some scalar, and since a scalar is an index-free entity, they must remain invariant under any sort of Lorentz transformation?

Now let's turn this effect in on itself and take the inner product of a vector with itself. In Euclidean space, such an inner product is, of course, the squared norm of a vector, and it is always positive. However, in spacetime, this is not the case? Because

[tex]\eta_{\mu\nu}v^{\mu}v^{\nu} = \left\{ \begin{array}{c}
< 0 \quad \mbox{timelike}\\
= 0 \quad \mbox{lightlike}\\
> 0 \quad \mbox{spacelike}
\end{array}\right.[/tex]

So from such a tensor, we may define the Kronecker delta (which is a tensor of type (1,1)) as

[tex]\delta^{\rho}_{\mu} = \eta^{\rho\nu}\eta_{\nu\mu} = \eta_{\mu\nu}\eta^{\nu\rho}[/tex]

Is this a sufficient derivation of the Kronecker delta in spacetime? By the way, am I correct in using Greek indices when referring to spacetime coordinates?

If I transform the spacetime metric tensor [itex]\eta[/itex], will its components change if I consider only flat spacetime? Is the same true for the Kronecker delta in flat spacetime? What will happen if spacetime is not flat?
 
  • #70
Oxymoron said:
So far we have been referring to the metric tensor by [itex]g_{ij}[/itex].

Some people use this notation (abstract index notation) for a tensor, while others don't. Some people choose to interpret [itex]g_{ij}[/itex] as components of a tensor with respect to a given basis, as I did in post #52. Then each [itex]g_{ij}[/itex] is a real number. I have mixed feelings on the subject. Regardless of one's choice of notation, there is an important distinction to be made between a tensor, and the components of a tensor with respect to a basis.

Now, if we move to Minkowski spacetime, is the metric tensor given by [itex]\eta_{\mu\nu}[/itex]?

This notation is often, but not exclusively, used.

the action of the metric [itex]\eta_{\mu\nu}[/itex] on two arbitrary vectors [itex]v,w[/itex] is basically an inner product isn't it? Since
[tex]\eta(v,w) = \eta_{\mu\nu}v^{\mu}w^{\nu} = v\cdot w[/tex]

Here, you're treating [itex]\eta_{\mu\nu}[/itex], [itex]v^{\mu}[/itex], and [itex]w^{\nu}[/itex] as real numbers. This can't be done without first choosing a basis. Another important point: the metric exists without choosing a basis. See the bottom of post #68. An interesting and somewhat challenging exercise is to show that this definition implies the existence of orthonormal bases. Note that I have used (purely as a matter of personal choice) the opposite signature to you. It seems I am in the minority with respect to this on this forum.

Now, since inner products between two vectors always give some scalar, and since a scalar is an index-free entity, they must remain invariant under any sort of Lorentz transformation?

This is the *definition* of a Lorentz transformation. Given [itex]g[/itex] (or if you prefer, [itex]\eta[/itex]), a Lorentz transformation is a linear mapping [itex]L: V \rightarrow V[/itex] such that

[tex]g \left( Lv, Lv \right) = g \left( v, v \right).[/tex]

Exercise: show that this implies [itex]g \left( Lu, Lv \right) = g \left( u, v \right)[/itex] for every [itex]u[/itex] and [itex]v[/itex] in [itex]V[/itex].

From this definition it follows that a Lorentz transformation maps an orthonormal basis to another orthonormal basis.
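A quick numerical sketch of the definition (an x-boost with an arbitrarily chosen velocity; in an orthonormal basis, preserving the metric is the matrix identity [itex]L^T \eta L = \eta[/itex]):

[code]
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # metric components, signature (+,-,-,-)

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
L = np.array([[gamma,       -gamma*beta, 0.0, 0.0],   # boost along x
              [-gamma*beta,  gamma,      0.0, 0.0],
              [0.0,          0.0,        1.0, 0.0],
              [0.0,          0.0,        0.0, 1.0]])

print(np.allclose(L.T @ eta @ L, eta))   # L preserves the metric: True

u = np.array([2.0, 1.0, 0.0, 0.0])
v = np.array([1.0, -1.0, 3.0, 0.0])
print(np.isclose((L @ u) @ eta @ (L @ v), u @ eta @ v))   # g(Lu,Lv) = g(u,v): True
[/code]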

Is this a sufficient derivation of the Kronecker delta in spacetime?

I prefer to think of it this way. The Kronecker delta is a mathematical object defined to be zero if the indices are not equal and one if the indices are equal. From the definition of [itex]\eta^{\nu\mu}[/itex], it follows that

[tex]\delta^{\rho}_{\mu} = \eta^{\rho\nu}\eta_{\nu\mu} = \eta_{\mu\nu}\eta^{\nu\rho}[/tex]

This leads to (but does not beg!) the question: How are the [itex]\eta^{\nu\mu}[/itex] defined?

By the way, am I correct in using Greek indices when referring to spacetime coordinates?

Again, this notational convention is often, but not exclusively, used. For example, Wald's well-known text on general relativity uses Latin indices for [itex]T_{ij}[/itex] thought of as a tensor, and Greek indices for [itex]T_{\mu\nu}[/itex] thought of as the components of a tensor with respect to a given basis. Both sets of indices in this book run over all of spacetime. This is part of the abstract index notation to which I referred above.

If I transform the spacetime metric tensor [itex]\eta[/itex], will its components change if I consider only flat spacetime?

In general, yes! Think of the change from inertial to spherical coordinates, etc.

Regards,
George

PS I know you're being overwhelmed by details, and by different points of view, but you're doing great.
 